Search for: All records
Creators/Authors contains: "Schaub, Florian"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Describing Privacy Enhancing Technologies (PETs) to the general public is challenging but essential to conveying the privacy protections they provide. Existing research has explored explanations of differential privacy in health contexts. Our study adapts well-performing textual descriptions of local differential privacy from prior work to a new context and broadens the investigation to descriptions of additional PETs. Specifically, we develop user-centric textual descriptions for popular PETs in ad tracking and analytics, including local differential privacy, federated learning with and without local differential privacy, and Google's Topics. We examine the applicability of previous findings to these expanded contexts and evaluate the PET descriptions with quantitative and qualitative survey data (n=306). We find that adapting a process- and implications-focused approach to the ad tracking and analytics context facilitates user understanding about as well as it did in health contexts, and that our descriptions developed with this process+implications approach for the additional, understudied PETs help users understand those PETs' processes. We also find that incorporating an implications statement into PET descriptions did not hurt user comprehension but also did not achieve a significant positive effect, which contrasts with prior findings in health contexts. We note that the use of technical terms, as well as the machine learning aspect of some PETs, led to confusion for some respondents even without delving into specifics. Based on our findings, we offer recommendations and insights for crafting effective user-centric descriptions of privacy-enhancing technologies.
    Free, publicly-accessible full text available January 1, 2026
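To ground the PET most central to these descriptions, below is a minimal, self-contained sketch of local differential privacy via randomized response. It is a textbook illustration of the underlying mechanism, not code from the study; the epsilon value and the debiasing step are standard but chosen here purely for illustration.

```python
# Illustrative sketch: local differential privacy via randomized response.
# Each user perturbs their own data before it leaves the device, so the
# aggregator never sees raw values -- the "process" the descriptions explain.
import math
import random

def randomized_response(true_value: bool, epsilon: float) -> bool:
    """Report the true answer with probability e^eps / (e^eps + 1),
    otherwise flip it. Satisfies eps-local differential privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_value if random.random() < p_truth else not true_value

# Simulate 10,000 users who all hold the value True (assumed for demo).
reports = [randomized_response(True, epsilon=1.0) for _ in range(10_000)]

# Debias the aggregate: E[observed] = p*f + (1 - p)*(1 - f), solve for f.
p = math.exp(1.0) / (math.exp(1.0) + 1)
observed = sum(reports) / len(reports)
estimate = (observed - (1 - p)) / (2 * p - 1)
print(f"estimated fraction of 'True' users: {estimate:.3f}")  # ~1.0
```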
  2. Smart home technologies are making their way into families. Parents' and children's shared use of smart home technologies has received growing attention in CSCW and related research communities. Families and children are also frequently featured as target audiences in smart home product marketing. However, there is limited knowledge of how exactly children and family interactions are portrayed in smart home product marketing, and to what extent those portrayals align with the actual consideration of children and families in product features and resources for child safety and privacy. We conducted a content analysis of product websites and online resources of 102 smart home products, as these materials constitute a main marketing channel and information source about products for consumers. We found that despite featuring children in smart home marketing, most analyzed product websites did not mention child safety features and lacked sufficient information on how children's data is collected and used. Specifically, our findings highlight misalignments in three aspects: (1) children are depicted as users of smart home products but there are insufficient child-friendly product features; (2) harmonious child-product co-presence is portrayed but potential child safety issues are neglected; and (3) children are shown as the subject of monitoring and datafication but there is limited information on child data collection and use. We discuss how parent-child relationships and parenting may be negatively impacted by such marketing depictions, and we provide design and policy recommendations for better incorporating child safety and privacy considerations into smart home products. 
  3. Companies increasingly use virtual reality (VR) for advertising. This raises the question: what risks does VR advertising pose for consumers? We analyze VR marketing experiences to identify risks and discuss opportunities to address those and future risks in VR advertising.
  4. Privacy policies are crucial for informing users about data practices, yet their length and complexity often deter users from reading them. In this paper, we propose an automated approach to identify and visualize data practices within privacy policies at different levels of detail. Leveraging crowd-sourced annotations from the ToS;DR platform, we experiment with various methods to match policy excerpts with predefined data practice descriptions. We further conduct a case study to evaluate our approach on a real-world policy, demonstrating its effectiveness in simplifying complex policies. Experiments show that our approach accurately matches data practice descriptions with policy excerpts, facilitating the presentation of simplified privacy information to users. 
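The abstract above does not specify how policy excerpts are matched to data practice descriptions; one plausible way to prototype such matching is with off-the-shelf sentence embeddings. The model name, example practices, and similarity threshold below are illustrative assumptions, not the paper's method.

```python
# Sketch: match a privacy policy excerpt to predefined data practice
# descriptions via cosine similarity of sentence embeddings.
from sentence_transformers import SentenceTransformer, util

# Hypothetical data practice descriptions (real ones would come from ToS;DR).
practices = [
    "This service may collect your precise location.",
    "Your data may be shared with third parties for advertising.",
    "You can request deletion of your personal data.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
practice_vecs = model.encode(practices, convert_to_tensor=True)

def match_excerpt(excerpt: str, threshold: float = 0.5):
    """Return the best-matching practice and its score, or None if no
    practice clears the (illustrative) similarity threshold."""
    vec = model.encode(excerpt, convert_to_tensor=True)
    scores = util.cos_sim(vec, practice_vecs)[0]
    best = int(scores.argmax())
    if float(scores[best]) < threshold:
        return None
    return practices[best], float(scores[best])

print(match_excerpt("We disclose information to advertising partners."))
```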
  5. Companies' privacy policies and their contents are analyzed for many reasons, including to assess the readability, usability, and utility of privacy policies; to extract and analyze the data practices of apps and websites; to assess companies' compliance with relevant laws and their own privacy policies; and to develop tools and machine learning models to summarize and read policies. Despite the importance of and interest in studying privacy policies among researchers, regulators, and privacy activists, few best practices or approaches have emerged, and infrastructure and tool support are scarce or scattered. To provide insight into how researchers study privacy policies and the challenges they face when doing so, we conducted 26 interviews with researchers from various disciplines who have conducted research on privacy policies. We provide insights on a range of challenges around policy selection, policy retrieval, and policy content analysis, as well as multiple overarching challenges researchers experienced across the research process. Based on our findings, we discuss opportunities to better facilitate privacy policy research, including research directions for methodologically advancing privacy policy analysis, potential structural changes around privacy policies, and avenues for fostering an interdisciplinary research community and maturing the field.
  6. The General Data Protection Regulation (GDPR) and other recent privacy laws require organizations to post their privacy policies and place specific expectations on organizations' privacy practices. Privacy policies take the form of documents written in natural language, and one of the expectations placed upon them is that they remain up to date. To investigate legal compliance with this recency requirement at a large scale, we create a novel pipeline that includes crawling, regex-based extraction, candidate date classification, and date object creation to extract updated and effective dates from privacy policies written in English. We then analyze patterns in policy dates using four web crawls and find that only about 40% of privacy policies online contain a date, thereby making it difficult to assess their regulatory compliance. We also find that updates to privacy policies are temporally concentrated around the passage of laws regulating digital privacy (such as the GDPR), and that more popular domains are more likely to have policy dates and to update their policies regularly.
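As a rough illustration of the regex-based extraction and date object creation stages this pipeline describes, the sketch below pulls updated/effective dates out of policy text. The patterns and keyword list are assumptions chosen for illustration, not the authors' actual implementation.

```python
# Sketch of regex-based date extraction and date object creation for
# English-language privacy policies. Patterns are illustrative only.
import re
from datetime import datetime

# Candidate phrases that often introduce a policy's updated/effective date,
# followed by either "Month DD, YYYY" or "MM/DD/YYYY".
DATE_CONTEXT = re.compile(
    r"(last\s+(?:updated|modified|revised)|effective\s+(?:date|as\s+of))"
    r"[:\s]*([A-Za-z]+\s+\d{1,2},\s+\d{4}|\d{1,2}/\d{1,2}/\d{4})",
    re.IGNORECASE,
)

def extract_policy_dates(policy_text: str) -> list[tuple[str, datetime]]:
    """Return (context, parsed_date) pairs found in a privacy policy."""
    results = []
    for match in DATE_CONTEXT.finditer(policy_text):
        context, raw_date = match.group(1), match.group(2)
        for fmt in ("%B %d, %Y", "%m/%d/%Y"):  # try each supported format
            try:
                results.append((context.lower(), datetime.strptime(raw_date, fmt)))
                break
            except ValueError:
                continue
    return results

print(extract_policy_dates(
    "Effective Date: May 25, 2018. Last updated 01/15/2023."
))
```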
  7. Workplaces are increasingly adopting emotion AI, promising benefits to organizations. However, little is known about the perceptions and experiences of workers subject to emotion AI in the workplace. Our interview study with US adult workers (n=15) addresses this gap, finding that (1) participants viewed emotion AI as a deep violation of the privacy of workers' sensitive emotional information; (2) emotion AI may function to enforce workers' compliance with emotional labor expectations, and workers may engage in emotional labor as a mechanism to preserve privacy over their emotions; and (3) workers may be exposed to a wide range of harms as a consequence of emotion AI in the workplace. These findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, and they raise important research and policy questions on how to protect and preserve emotional privacy within and beyond the workplace.